Generic Correlation Increases Noncoherent MIMO Capacity
We study the high-SNR capacity of MIMO Rayleigh block-fading channels in the
noncoherent setting where neither transmitter nor receiver has a priori channel
state information. We show that when the number of receive antennas is
sufficiently large and the temporal correlation within each block is "generic"
(in the sense used in the interference-alignment literature), the capacity
pre-log is given by T(1-1/N) for T<N, where T denotes the number of transmit
antennas and N denotes the block length. A comparison with the widely used
constant block-fading channel (where the fading is constant within each block)
shows that for a large block length, generic correlation increases the capacity
pre-log by a factor of about four.
Comment: To be presented at IEEE Int. Symp. Inf. Theory (ISIT) 2013, Istanbul, Turkey.
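To see where the factor of about four comes from, here is a back-of-the-envelope sketch (my own illustration, not code from the paper): it compares the generic-correlation pre-log T(1 - 1/N), maximized by using T = N - 1 transmit antennas, with the classical Zheng-Tse constant block-fading pre-log Q(1 - Q/N) maximized over Q, whose optimum sits near Q = N/2. Both pre-log formulas are assumptions taken from the abstract and the standard noncoherent block-fading literature.

```python
def prelog_generic(N):
    # generic temporal correlation: T * (1 - 1/N), maximized at T = N - 1
    T = N - 1
    return T * (1 - 1 / N)

def prelog_constant(N):
    # constant block fading: Q * (1 - Q/N), maximized over integer Q
    # (the optimum sits near Q = N/2, giving roughly N/4 for large N)
    return max(Q * (1 - Q / N) for Q in range(1, N))

N = 1000  # block length
ratio = prelog_generic(N) / prelog_constant(N)
print(f"generic: {prelog_generic(N):.1f}, "
      f"constant: {prelog_constant(N):.1f}, ratio: {ratio:.2f}")
```

For large N the generic pre-log approaches N while the constant block-fading pre-log approaches N/4, so the ratio tends to four, matching the abstract's claim.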
Information Bottleneck on General Alphabets
We prove rigorously a source coding theorem that can probably be considered
folklore, a generalization to arbitrary alphabets of a problem motivated by the
Information Bottleneck method. For general random variables (Y, X), we show
essentially that for some n, a function f with rate limit log |range(f)| <= nR
and relevance I(Y^n; f(X^n)) >= nS exists if and only if there is a random
variable U such that the Markov chain Y - X - U holds, I(U; X) <= R, and
I(U; Y) >= S. The proof relies on the well-established discrete case and
showcases a technique for lifting discrete coding theorems to arbitrary
alphabets.
Comment: extended version, presented at ISIT 2018, Vail, CO.
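The Markov-chain characterization can be illustrated numerically. The sketch below (a toy joint distribution of my own, not from the paper) builds a chain Y - X - U by composing two noisy channels and checks the data-processing inequality I(U; Y) <= I(U; X), which is the reason a rate constraint I(U; X) <= R caps the achievable relevance I(U; Y).

```python
import itertools
import math

def mutual_information(p_joint):
    # I(A;B) in bits for a joint pmf given as a dict {(a, b): prob}
    pa, pb = {}, {}
    for (a, b), p in p_joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in p_joint.items() if p > 0)

# Toy Markov chain Y - X - U: p(y, x, u) = p(y|x) p(u|x) p(x)
p_x = {0: 0.5, 1: 0.5}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}  # noisy copy of X
p_u_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.3, 1: 0.7}}  # compressed view of X

p_uy, p_ux = {}, {}
for x, y, u in itertools.product([0, 1], repeat=3):
    p = p_x[x] * p_y_given_x[x][y] * p_u_given_x[x][u]
    p_uy[(u, y)] = p_uy.get((u, y), 0.0) + p
    p_ux[(u, x)] = p_ux.get((u, x), 0.0) + p

i_ux = mutual_information(p_ux)  # "rate" side I(U;X)
i_uy = mutual_information(p_uy)  # "relevance" side I(U;Y)
print(f"I(U;X) = {i_ux:.4f} bits, I(U;Y) = {i_uy:.4f} bits")
assert i_uy <= i_ux + 1e-12  # data-processing inequality
```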
Lossy Compression of General Random Variables
This paper is concerned with the lossy compression of general random
variables, specifically with rate-distortion theory and quantization of random
variables taking values in general measurable spaces such as manifolds
and fractal sets. Manifold structures are prevalent in data science, e.g., in
compressed sensing, machine learning, image processing, and handwritten digit
recognition. Fractal sets find application in image compression and in the
modeling of Ethernet traffic. Our main contributions are bounds on the
rate-distortion function and the quantization error. These bounds are very
general and essentially only require the existence of reference measures
satisfying certain regularity conditions in terms of small ball probabilities.
To illustrate the wide applicability of our results, we particularize them to
random variables taking values in i) manifolds, namely, hyperspheres and
Grassmannians, and ii) self-similar sets characterized by iterated function
systems satisfying the weak separation property.
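As an elementary illustration of quantization on a manifold (my own Monte Carlo sketch, not one of the paper's bounds): quantizing a uniform random point on the unit circle with K equally spaced codepoints. The mean squared error decays roughly like 1/K^2, reflecting the circle's intrinsic dimension of one rather than the ambient dimension two.

```python
import math
import random

def circle_quantization_mse(K, trials=100_000, seed=0):
    # Mean squared Euclidean (chord) error when a uniform point on the
    # unit circle is mapped to the nearest of K equally spaced codepoints.
    rng = random.Random(seed)
    step = 2.0 * math.pi / K
    total = 0.0
    for _ in range(trials):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        q = round(theta / step) * step        # nearest codepoint angle
        total += 2.0 - 2.0 * math.cos(theta - q)  # squared chord distance
    return total / trials

for K in (8, 16, 32):
    print(K, circle_quantization_mse(K))
```

Doubling K cuts the error by a factor of about four, consistent with the 1/K^2 rate expected for a one-dimensional set.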
Oversampling Increases the Pre-Log of Noncoherent Rayleigh Fading Channels
We analyze the capacity of a continuous-time, time-selective, Rayleigh
block-fading channel in the high signal-to-noise ratio (SNR) regime. The fading
process is assumed to be stationary within each block and to change independently
from block to block; furthermore, its realizations are not known a priori to
the transmitter and the receiver (noncoherent setting). A common approach to
analyzing the capacity of this channel is to assume that the receiver performs
matched filtering followed by sampling at symbol rate (symbol matched
filtering). This yields a discrete-time channel in which each transmitted
symbol corresponds to one output sample. Liang & Veeravalli (2004) showed that
the capacity of this discrete-time channel grows logarithmically with the SNR,
with a capacity pre-log equal to 1 - Q/N. Here, N is the number of symbols
transmitted within one fading block, and Q is the rank of the covariance
matrix of the discrete-time channel gains within each fading block.
In this paper, we show that symbol matched filtering is not a
capacity-achieving strategy for the underlying continuous-time channel.
Specifically, we analyze the capacity pre-log of the discrete-time channel
obtained by oversampling the continuous-time channel output, i.e., by sampling
it faster than at symbol rate. We prove that by oversampling by a factor two
one gets a capacity pre-log that is at least as large as 1 - 1/N. Since the
capacity pre-log corresponding to symbol-rate sampling is 1 - Q/N, our result
indeed implies that symbol matched filtering is not capacity achieving at high
SNR.
Comment: To appear in the IEEE Transactions on Information Theory.
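A quick numerical sanity check (my own sketch; the two pre-log formulas below, 1 - Q/N for symbol-rate sampling per Liang & Veeravalli and the lower bound 1 - 1/N under twofold oversampling, are assumptions taken from this abstract's setting):

```python
def prelog_symbol_rate(N, Q):
    # symbol-rate sampling: 1 - Q/N (assumed form; N = symbols per
    # block, Q = rank of the channel-gain covariance matrix)
    return 1 - Q / N

def prelog_oversampled(N):
    # lower bound achieved with twofold oversampling (assumed form)
    return 1 - 1 / N

N = 10
print(prelog_symbol_rate(N, 4), prelog_oversampled(N))
# For any rank Q >= 2, oversampling strictly improves the pre-log:
assert all(prelog_oversampled(N) > prelog_symbol_rate(N, Q)
           for Q in range(2, N))
```

The gap 1 - 1/N versus 1 - Q/N widens with the rank Q, i.e., the more time-selective the fading within a block, the more oversampling helps.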
Lossless Linear Analog Compression
We establish the fundamental limits of lossless linear analog compression by
considering the recovery of random vectors x in R^m from the noiseless linear
measurements y = Ax with n x m measurement matrix A. Specifically, for a
random vector x of arbitrary distribution we show that x can be recovered
with zero error probability from n > K(x) linear measurements, where K(x) is
the infimum of the lower modified Minkowski dimension over all support sets U
of x, i.e., over all sets U with P[x in U] = 1. This achievability statement
holds for Lebesgue almost all measurement matrices A. We then show that
s-rectifiable random vectors---a stochastic generalization of s-sparse
vectors---can be recovered with zero error probability from n > s linear
measurements. From classical compressed sensing theory we would expect n >= s
to be necessary for successful recovery of x. Surprisingly, certain classes
of s-rectifiable random vectors can be recovered from fewer than s
measurements. Imposing an additional regularity condition on the distribution
of s-rectifiable random vectors x, we do get the expected converse result of
s measurements being necessary. The resulting class of random vectors appears
to be new and will be referred to as s-analytic random vectors.
Lossless Analog Compression
We establish the fundamental limits of lossless analog compression by
considering the recovery of arbitrary m-dimensional real random vectors x from
the noiseless linear measurements y=Ax with n x m measurement matrix A. Our
theory is inspired by the groundbreaking work of Wu and Verdu (2010) on almost
lossless analog compression, but applies to the nonasymptotic, i.e., fixed-m
case, and considers zero error probability. Specifically, our achievability
result states that, for almost all A, the random vector x can be recovered with
zero error probability provided that n > K(x), where K(x) is given by the
infimum of the lower modified Minkowski dimension over all support sets U of x.
We then particularize this achievability result to the class of s-rectifiable
random vectors as introduced in Koliander et al. (2016); these are random
vectors of absolutely continuous distribution---with respect to the
s-dimensional Hausdorff measure---supported on countable unions of
s-dimensional differentiable submanifolds of the m-dimensional real coordinate
space. Countable unions of differentiable submanifolds include essentially all
signal models used in the compressed sensing literature. Specifically, we prove
that, for almost all A, s-rectifiable random vectors x can be recovered with
zero error probability from n>s linear measurements. This threshold is,
however, found not to be tight as exemplified by the construction of an
s-rectifiable random vector that can be recovered with zero error probability
from n<s linear measurements. This leads us to the introduction of the new
class of s-analytic random vectors, which admit a strong converse in the sense
of n greater than or equal to s being necessary for recovery with probability
of error smaller than one. The central conceptual tools in the development of
our theory are geometric measure theory and the theory of real analytic
functions.
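As an elementary illustration of the n > s recovery threshold (my own sketch, not the paper's proof technique): an s-sparse vector in R^m is a special case of an s-rectifiable vector, and for almost all A it can be recovered with zero error from n = s + 1 measurements by exhaustively testing candidate supports. The decoder below is written for s = 2 to keep the linear algebra explicit.

```python
import itertools
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def recover_2sparse(y, cols, tol=1e-8):
    # Exhaustive-support decoder for s = 2: try every pair of columns of
    # A, solve the 2x2 normal equations, and accept the first exactly
    # consistent solution. For generic (e.g., Gaussian) A and n >= s + 1
    # measurements, only the true support is consistent with y.
    for i, j in itertools.combinations(range(len(cols)), 2):
        a, b = cols[i], cols[j]
        aa, ab, bb = dot(a, a), dot(a, b), dot(b, b)
        ay, by = dot(a, y), dot(b, y)
        det = aa * bb - ab * ab
        if abs(det) < tol:
            continue
        c1 = (bb * ay - ab * by) / det
        c2 = (aa * by - ab * ay) / det
        residual = sum((yk - c1 * ak - c2 * bk) ** 2
                       for yk, ak, bk in zip(y, a, b))
        if residual < tol:
            x = [0.0] * len(cols)
            x[i], x[j] = c1, c2
            return x
    return None

rng = random.Random(1)
m, n = 6, 3                             # n = s + 1 = 3 measurements
cols = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(m)]  # columns of A
x_true = [0.0] * m
x_true[1], x_true[4] = 2.5, -1.0        # 2-sparse ground truth
y = [sum(cols[j][k] * x_true[j] for j in range(m)) for k in range(n)]

x_hat = recover_2sparse(y, cols)
print(x_hat)
```

The exhaustive search is of course exponential in s; the point is purely information-theoretic: n = s + 1 generic measurements already determine a 2-sparse x uniquely, mirroring the n > s achievability result above.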